Bit rates
Name | Symbol | Multiple
---|---|---
Decimal prefixes (SI) | |
kilobit per second | kbit/s | 10³
megabit per second | Mbit/s | 10⁶
gigabit per second | Gbit/s | 10⁹
terabit per second | Tbit/s | 10¹²
Binary prefixes (IEC 60027-2) | |
kibibit per second | Kibit/s | 2¹⁰
mebibit per second | Mibit/s | 2²⁰
gibibit per second | Gibit/s | 2³⁰
tebibit per second | Tibit/s | 2⁴⁰
In telecommunications and computing, bit rate (sometimes written bitrate; also called data rate, or denoted by the variable R or f_b) is the number of bits that are conveyed or processed per unit of time.
The bit rate is quantified using the bits per second unit (bit/s or bps), often in conjunction with an SI prefix such as kilo- (kbit/s or kbps), mega- (Mbit/s or Mbps), giga- (Gbit/s or Gbps) or tera- (Tbit/s or Tbps). Note that, unlike many other computer-related units, 1 kbit/s is traditionally defined as 1,000 bit/s, not 1,024 bit/s; this convention was already in use before 1999, when binary prefixes for units of information were introduced in the standard IEC 60027-2.
The formal abbreviation for "bits per second" is "bit/s" (not "bits/s", see writing style for SI units). In less formal contexts the abbreviations "b/s" or "bps" are often used, though this risks confusion with "bytes per second" ("B/s", "Bps"). 1 byte/s (B/s or Bps) corresponds to 8 bit/s (b/s or bps).
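As a minimal illustration of the byte/bit distinction, the following Python sketch (with a hypothetical rate chosen purely for the example) converts a rate given in bytes per second to bits per second and formats it with a decimal SI prefix:

```python
# Convert a rate in bytes per second to bits per second and format it
# with a decimal (SI) prefix. The example value is hypothetical.

def bytes_per_second_to_bits(rate_Bps: float) -> float:
    """1 B/s corresponds to 8 bit/s."""
    return rate_Bps * 8

def format_si(rate_bps: float) -> str:
    """Format a bit rate using decimal prefixes (1 kbit/s = 1,000 bit/s)."""
    for factor, prefix in ((1e9, "Gbit/s"), (1e6, "Mbit/s"), (1e3, "kbit/s")):
        if rate_bps >= factor:
            return f"{rate_bps / factor:g} {prefix}"
    return f"{rate_bps:g} bit/s"

print(format_si(bytes_per_second_to_bits(12_500_000)))  # 12.5 MB/s -> "100 Mbit/s"
```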
In digital communication systems, the gross bitrate[1], raw bitrate[2], data signaling rate[3] or uncoded transmission rate[2] is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead. The gross bit rate is related to, but should not be confused with, the symbol rate in baud, symbols/s or pulses/s. Gross bit rate can be used interchangeably with "baud" only when each modulation transition of a data transmission system carries exactly one bit of data, which is not the case for modern modem modulation schemes and modern LANs, for example.
For most line codes and modulation methods:

symbol rate ≤ gross bit rate
More specifically, a line code representing the data using pulse-amplitude modulation with 2^N different voltage levels, or a digital modulation method using 2^N different symbols, for example 2^N amplitudes, phases or frequencies, can transfer N bit/symbol, or N bit/pulse. This results in:

gross bit rate = symbol rate × N
An exception to the above is given by some self-synchronizing line codes, for example Manchester coding and return-to-zero (RTZ) coding, where each bit is represented by two pulses (signal states), resulting in:

gross bit rate = symbol rate / 2
A theoretical upper bound for the symbol rate in baud, symbols/s or pulses/s for a certain analog bandwidth in hertz is given by the Nyquist law:

symbol rate ≤ Nyquist rate = 2 × bandwidth
In practice this upper bound can only be approached for line coding schemes (or baseband transmission) and for so-called vestigial sideband digital modulation. Most other digital carrier-modulated schemes (or passband transmission schemes), for example ASK, PSK and QAM, can be characterized as double sideband modulation, resulting in the following approximate relation:

symbol rate ≤ bandwidth
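The relations above lend themselves to a quick numeric check. The following Python sketch uses hypothetical figures (a 1 MHz double-sideband passband channel carrying 16-QAM symbols; none of these values come from the text) to derive the Nyquist symbol-rate bound from the analog bandwidth and the gross bit rate from the number of bits carried per symbol:

```python
import math

# Hypothetical example: a passband (double sideband) channel with 1 MHz of
# analog bandwidth, carrying 16-QAM symbols (2^4 = 16 symbols, 4 bit/symbol).
bandwidth_hz = 1_000_000
modulation_symbols = 16

bits_per_symbol = int(math.log2(modulation_symbols))   # N = 4
nyquist_symbol_rate = 2 * bandwidth_hz                  # upper bound: 2 Mbaud
dsb_symbol_rate = bandwidth_hz                          # double sideband: ~1 Mbaud
gross_bit_rate = dsb_symbol_rate * bits_per_symbol      # 4 Mbit/s

print(f"{bits_per_symbol} bit/symbol, symbol rate {dsb_symbol_rate} baud, "
      f"gross bit rate {gross_bit_rate} bit/s")
```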
The physical layer net bitrate, peak bitrate, useful bit rate, information rate[1], payload rate, coded transmission rate[2], effective data rate[2] or wire speed (informal language) of a digital communication link is the capacity excluding the physical layer protocol overhead, for example time division multiplex (TDM) framing bits, redundant forward error correction (FEC) codes, equalizer training symbols and other channel coding. Error-correcting codes are common especially in wireless communication systems and broadband modem standards. The relationship between the gross bit rate and the net bit rate is affected by the FEC code rate according to the following relation:

net bit rate ≤ gross bit rate × code rate
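As a minimal sketch of that relation (the function name is ours, and the usage figures simply mirror the IEEE 802.11a numbers quoted below, where the 54 Mbit/s mode uses a rate-3/4 code):

```python
from fractions import Fraction

def net_bit_rate(gross_bit_rate_bps: float, code_rate: Fraction) -> float:
    """Upper bound on the physical layer net bit rate when the FEC code
    rate is the only channel coding overhead considered."""
    return gross_bit_rate_bps * float(code_rate)

# A 72 Mbit/s gross rate with a rate-3/4 code yields a 54 Mbit/s net rate.
print(net_bit_rate(72_000_000, Fraction(3, 4)))  # 54000000.0
```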
Some operating systems indicate the "connection speed" (informal language) of a network access technology or communication device. The connection speed of a technology that involves forward error correction typically refers to the physical layer net bit rate in accordance with the above definition.
For example, the net bitrate (and thus the "connection speed") of an IEEE 802.11a wireless network is between 6 and 54 Mbit/s, while the gross bit rate is between 12 and 72 Mbit/s inclusive of error-correcting codes. The 144 kbit/s (64+64+16) net bit rate of an ISDN Basic Rate Interface (two B-channels plus one D-channel) likewise refers to the payload data rate, while the data signaling rate is 160 kbit/s.
The net bitrate of the Ethernet 100Base-TX physical layer standard is 100 Mbit/s, while the gross bitrate is 125 Mbit/s, due to the 4B5B (four bits mapped to five bits) encoding. In this case, the gross bit rate is equal to the symbol rate or pulse rate of 125 Mbaud, due to the NRZI line code.
In communications technologies without forward error correction and other physical layer protocol overhead, there is no distinction between gross bit rate and physical layer net bit rate. For example, the net as well as gross bit rate of Ethernet 10Base-T is 10 Mbit/s. Due to the Manchester line code, each bit is represented by two pulses, resulting in a pulse rate of 20 Mbaud.
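Using the figures from the two Ethernet examples above, a short Python sketch can recompute the gross bit rate and pulse rate implied by the 4B5B and Manchester codings (the helper names are ours, chosen for illustration only):

```python
# Recompute the Ethernet figures quoted above. Function names are illustrative.

def gross_from_4b5b(net_bit_rate_bps: float) -> float:
    """4B5B maps every 4 data bits onto 5 code bits (100Base-TX)."""
    return net_bit_rate_bps * 5 / 4

def pulse_rate_manchester(bit_rate_bps: float) -> float:
    """Manchester coding represents each bit by two pulses (10Base-T)."""
    return bit_rate_bps * 2

print(gross_from_4b5b(100e6))       # 125000000.0 bit/s -> 125 Mbaud line rate
print(pulse_rate_manchester(10e6))  # 20000000.0 baud
```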
The net bitrate of a V.92 voiceband modem refers to the gross bit rate, since there is no additional error-correction code. It can be up to 56,000 bit/s downstream and 48,000 bit/s upstream. A lower bit rate may be chosen during the connection establishment phase due to adaptive modulation: slower but more robust modulation schemes are chosen when the signal-to-noise ratio is poor.
The channel capacity, also known as the Shannon capacity, is a theoretical upper bound for the maximum net bitrate, exclusive of forward error correction coding, that is possible without bit errors for a certain physical analog node-to-node communication link.
The channel capacity is proportional to the analog bandwidth in hertz. This proportionality is called Hartley's law. Consequently the net bit rate is sometimes called digital bandwidth capacity in bit/s.
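For a concrete sense of this bound, the Shannon-Hartley theorem (the standard form of the Shannon capacity for an additive white Gaussian noise channel, not spelled out above) gives capacity = bandwidth × log2(1 + S/N). A small Python sketch with hypothetical voiceband figures:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley capacity in bit/s for an AWGN channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical voiceband figures: about 3,100 Hz of bandwidth and 40 dB SNR.
snr = 10 ** (40 / 10)                      # 40 dB -> 10,000 (linear)
print(round(shannon_capacity(3100, snr)))  # roughly 41,000 bit/s
```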
Note that the term line rate in some textbooks is defined as gross bit rate, in others as net bit rate.
The term throughput, essentially the same thing as digital bandwidth consumption, denotes the achieved average useful bit rate in a computer network over a logical or physical communication link or through a network node, typically measured at a reference point above the datalink layer. This implies that the throughput often excludes data link layer protocol overhead. The throughput is affected by the traffic load from the data source in question, as well as from other sources sharing the same network resources. See also Measuring network throughput.
Goodput or data transfer rate refers to the achieved average net bit rate that is delivered to the application layer, exclusive of all protocol overhead, data packet retransmissions, etc. For example, in the case of file transfer, the goodput corresponds to the achieved file transfer rate. The file transfer rate in bit/s can be calculated as the file size (in bytes), divided by the file transfer time (in seconds), and multiplied by eight.
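The file transfer calculation described above is straightforward to express directly; the following Python sketch (file size and duration are made-up example values) returns the goodput in bit/s:

```python
def file_transfer_rate_bps(file_size_bytes: int, transfer_time_s: float) -> float:
    """Goodput of a file transfer: size in bytes divided by the transfer
    time in seconds, multiplied by eight (bits per byte)."""
    return file_size_bytes / transfer_time_s * 8

# Hypothetical example: a 25 MB file transferred in 40 seconds.
print(file_transfer_rate_bps(25_000_000, 40))  # 5000000.0 bit/s = 5 Mbit/s
```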
As an example, the goodput or data transfer rate of a V.92 voiceband modem is affected by the modem physical layer and data link layer protocols. It is sometimes higher than the physical layer data rate due to V.44 data compression, and sometimes lower due to bit-errors and automatic repeat request retransmissions.
If no data compression is provided by the network equipment or protocols, we have the following relation:

goodput ≤ throughput ≤ maximum throughput ≤ net bit rate

for a certain communication path.
In digital multimedia, bit rate often refers to the number of bits used per unit of playback time to represent a continuous medium such as audio or video after source coding (data compression). The size of a multimedia file in bytes is the product of the bit rate (in bit/s) and the length of the recording (in seconds), divided by eight.
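That size relationship can be checked with a couple of lines of Python (the bit rate and duration below are arbitrary example values):

```python
def media_file_size_bytes(bit_rate_bps: float, duration_s: float) -> float:
    """Size of an encoded recording: bit rate times length, divided by eight."""
    return bit_rate_bps * duration_s / 8

# Hypothetical example: a 128 kbit/s audio stream lasting 4 minutes.
print(media_file_size_bytes(128_000, 4 * 60))  # 3840000.0 bytes (about 3.84 MB)
```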
In the case of realtime streaming multimedia, this bit rate is the goodput that is required to avoid interruptions. For streaming multimedia without interruptions, we have the following relationship:

multimedia bit rate ≤ goodput
The term average bitrate is used in case of variable bitrate multimedia source coding schemes.
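For a variable bitrate encoding, the average bitrate is simply the total number of encoded bits divided by the playback duration. A brief Python sketch, with made-up per-segment figures:

```python
def average_bit_rate_bps(segment_sizes_bytes, total_duration_s: float) -> float:
    """Average bitrate of a VBR encoding: total encoded bits over total duration."""
    total_bits = 8 * sum(segment_sizes_bytes)
    return total_bits / total_duration_s

# Hypothetical example: three VBR segments of 1.0, 1.5 and 0.5 MB covering 120 s.
print(average_bit_rate_bps([1_000_000, 1_500_000, 500_000], 120))  # 200000.0 bit/s
```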
A theoretical lower bound for the multimedia bit rate for lossless data compression is the source information rate, also known as the entropy rate.
When quantifying large bit rates, SI prefixes (also known as metric or decimal prefixes) are used, thus:

Bit rate | SI notation
---|---
1,000 bit/s | 1 kbit/s (one kilobit or one thousand bits per second)
1,000,000 bit/s | 1 Mbit/s (one megabit or one million bits per second)
1,000,000,000 bit/s | 1 Gbit/s (one gigabit or one billion bits per second)
Binary prefixes have almost never been used for bit rates, although they may occasionally be seen when data rates are expressed in bytes per second (e.g. 1 kByte/s or kBps is sometimes interpreted as 1,000 byte/s, sometimes as 1,024 byte/s). The 1999 IEC standard (IEC 60027-2) specifies distinct abbreviations for binary and decimal (SI) prefixes (e.g. 1 KiB/s = 1,024 byte/s = 8,192 bit/s, and 1 MiB/s = 1,024 KiB/s), but these are still not very common in the literature, so it is sometimes necessary to seek clarification of the units used in a particular context.
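The ambiguity can be made explicit in code. This Python sketch (the interpretation labels are ours, not standardized terms) shows how a nominal "1 kB/s" differs under decimal and binary readings, alongside the unambiguous IEC form:

```python
# How "1 kB/s" may be read, expressed in bit/s. Dictionary keys are
# descriptive labels chosen for this example only.
interpretations = {
    "decimal (SI): 1 kB/s = 1,000 B/s": 1_000 * 8,
    "binary (legacy): 1 kB/s = 1,024 B/s": 1_024 * 8,
    "IEC 60027-2: 1 KiB/s = 1,024 B/s": 1_024 * 8,
}

for label, bits_per_second in interpretations.items():
    print(f"{label} -> {bits_per_second} bit/s")
```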
Examples of physical layer net bit rates in communication standard interfaces and devices include WAN modems, Ethernet LANs, WiFi WLANs and mobile data technologies.
For more examples, see List of device bandwidths.
In digital multimedia, bitrate represents the amount of information, or detail, that is stored per unit of time of a recording. The bitrate depends on several factors: the sampling frequency of the original material, the number of bits used per sample, the encoding scheme, and the compression algorithm and degree of compression applied.
Generally, choices are made about the above factors in order to achieve the desired trade-off between minimizing the bitrate and maximizing the quality of the material when it is played.
If lossy data compression is used on audio or visual data, differences from the original signal will be introduced; if the compression is substantial, or lossy data is decompressed and recompressed, this may become noticeable in the form of compression artifacts. Whether these affect the perceived quality, and if so how much, depends on the compression scheme, the power of the encoder, the characteristics of the input data, the listener's perception, the listener's familiarity with artifacts, and the listening or viewing environment.
The bitrates in this section are approximately the minimum that the average listener in a typical listening or viewing environment, when using the best available compression, would perceive as not significantly worse than the reference standard.
For technical reasons (hardware/software protocols, overheads, encoding schemes, etc.) the actual bitrates used by some of the compared-to devices may be significantly higher than what is listed above.
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (in support of MIL-STD-188).